DOMAIN: Electronics and Telecommunication • CONTEXT: A communications equipment manufacturer has a product that emits informative signals. The company wants to build a machine learning model that can predict the equipment's signal quality from various parameters. • DATA DESCRIPTION: The data set contains information on various signal tests performed:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
pip install keras-tuner
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Requirement already satisfied: keras-tuner in /usr/local/lib/python3.7/dist-packages (1.1.3)
import pandas as pd
import numpy as np
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout,LeakyReLU
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from tensorflow.keras.losses import binary_crossentropy,BinaryCrossentropy, SparseCategoricalCrossentropy, CategoricalCrossentropy,categorical_crossentropy,sparse_categorical_crossentropy
from tensorflow.keras.optimizers import Adam
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler,LabelEncoder
from tensorflow.keras.utils import to_categorical,plot_model
from tensorflow.keras.metrics import AUC
from sklearn.metrics import confusion_matrix, classification_report
import warnings
warnings.filterwarnings('ignore')
from kerastuner import Hyperband
import kerastuner
signal_df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/DeepLearning/Assignment/NN Project Data - Signal.csv')
signal_df.head(5)
| | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 7.4 | 0.70 | 0.00 | 1.9 | 0.076 | 11.0 | 34.0 | 0.9978 | 3.51 | 0.56 | 9.4 | 5 |
| 1 | 7.8 | 0.88 | 0.00 | 2.6 | 0.098 | 25.0 | 67.0 | 0.9968 | 3.20 | 0.68 | 9.8 | 5 |
| 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15.0 | 54.0 | 0.9970 | 3.26 | 0.65 | 9.8 | 5 |
| 3 | 11.2 | 0.28 | 0.56 | 1.9 | 0.075 | 17.0 | 60.0 | 0.9980 | 3.16 | 0.58 | 9.8 | 6 |
| 4 | 7.4 | 0.70 | 0.00 | 1.9 | 0.076 | 11.0 | 34.0 | 0.9978 | 3.51 | 0.56 | 9.4 | 5 |
### Checking the dataframe using info()
signal_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 12 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   Parameter 1      1599 non-null   float64
 1   Parameter 2      1599 non-null   float64
 2   Parameter 3      1599 non-null   float64
 3   Parameter 4      1599 non-null   float64
 4   Parameter 5      1599 non-null   float64
 5   Parameter 6      1599 non-null   float64
 6   Parameter 7      1599 non-null   float64
 7   Parameter 8      1599 non-null   float64
 8   Parameter 9      1599 non-null   float64
 9   Parameter 10     1599 non-null   float64
 10  Parameter 11     1599 non-null   float64
 11  Signal_Strength  1599 non-null   int64
dtypes: float64(11), int64(1)
memory usage: 150.0 KB
signal_df.isna().sum()
Parameter 1        0
Parameter 2        0
Parameter 3        0
Parameter 4        0
Parameter 5        0
Parameter 6        0
Parameter 7        0
Parameter 8        0
Parameter 9        0
Parameter 10       0
Parameter 11       0
Signal_Strength    0
dtype: int64
B. Check for missing values and print percentage for each attribute. [2 Marks]
print("Missing value percentage for each feature")
signal_df.isna().sum()/signal_df.isna().count()*100
Missing value percentage for each feature
Parameter 1        0.0
Parameter 2        0.0
Parameter 3        0.0
Parameter 4        0.0
Parameter 5        0.0
Parameter 6        0.0
Parameter 7        0.0
Parameter 8        0.0
Parameter 9        0.0
Parameter 10       0.0
Parameter 11       0.0
Signal_Strength    0.0
dtype: float64
C. Check for presence of duplicate records in the dataset and impute with appropriate method. [2 Marks]
print("Are there any duplicate records present in the dataframe? ", signal_df.duplicated().any())
print("Count of duplicate records present in the dataframe: ", signal_df.duplicated().sum())
Are there any duplicate records present in the dataframe?  True
Count of duplicate records present in the dataframe:  240
print("Printing all the duplicate records in the signal dataframe")
duplicate = signal_df[signal_df.duplicated()]
duplicate
Printing all the duplicate records in the signal dataframe
| | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 4 | 7.4 | 0.700 | 0.00 | 1.90 | 0.076 | 11.0 | 34.0 | 0.99780 | 3.51 | 0.56 | 9.4 | 5 |
| 11 | 7.5 | 0.500 | 0.36 | 6.10 | 0.071 | 17.0 | 102.0 | 0.99780 | 3.35 | 0.80 | 10.5 | 5 |
| 27 | 7.9 | 0.430 | 0.21 | 1.60 | 0.106 | 10.0 | 37.0 | 0.99660 | 3.17 | 0.91 | 9.5 | 5 |
| 40 | 7.3 | 0.450 | 0.36 | 5.90 | 0.074 | 12.0 | 87.0 | 0.99780 | 3.33 | 0.83 | 10.5 | 5 |
| 65 | 7.2 | 0.725 | 0.05 | 4.65 | 0.086 | 4.0 | 11.0 | 0.99620 | 3.41 | 0.39 | 10.9 | 5 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 1563 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1564 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1567 | 7.2 | 0.695 | 0.13 | 2.00 | 0.076 | 12.0 | 20.0 | 0.99546 | 3.29 | 0.54 | 10.1 | 5 |
| 1581 | 6.2 | 0.560 | 0.09 | 1.70 | 0.053 | 24.0 | 32.0 | 0.99402 | 3.54 | 0.60 | 11.3 | 5 |
| 1596 | 6.3 | 0.510 | 0.13 | 2.30 | 0.076 | 29.0 | 40.0 | 0.99574 | 3.42 | 0.75 | 11.0 | 6 |
240 rows × 12 columns
### Remove the duplicate records from the dataframe
signal_df.drop_duplicates(inplace=True, ignore_index=True)
print("Number of duplicate records left after dropping: ", signal_df.duplicated().sum())
Number of duplicate records left after dropping:  0
signal_df.describe()
| | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| count | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 | 1359.000000 |
| mean | 8.310596 | 0.529478 | 0.272333 | 2.523400 | 0.088124 | 15.893304 | 46.825975 | 0.996709 | 3.309787 | 0.658705 | 10.432315 | 5.623252 |
| std | 1.736990 | 0.183031 | 0.195537 | 1.352314 | 0.049377 | 10.447270 | 33.408946 | 0.001869 | 0.155036 | 0.170667 | 1.082065 | 0.823578 |
| min | 4.600000 | 0.120000 | 0.000000 | 0.900000 | 0.012000 | 1.000000 | 6.000000 | 0.990070 | 2.740000 | 0.330000 | 8.400000 | 3.000000 |
| 25% | 7.100000 | 0.390000 | 0.090000 | 1.900000 | 0.070000 | 7.000000 | 22.000000 | 0.995600 | 3.210000 | 0.550000 | 9.500000 | 5.000000 |
| 50% | 7.900000 | 0.520000 | 0.260000 | 2.200000 | 0.079000 | 14.000000 | 38.000000 | 0.996700 | 3.310000 | 0.620000 | 10.200000 | 6.000000 |
| 75% | 9.200000 | 0.640000 | 0.430000 | 2.600000 | 0.091000 | 21.000000 | 63.000000 | 0.997820 | 3.400000 | 0.730000 | 11.100000 | 6.000000 |
| max | 15.900000 | 1.580000 | 1.000000 | 15.500000 | 0.611000 | 72.000000 | 289.000000 | 1.003690 | 4.010000 | 2.000000 | 14.900000 | 8.000000 |
signal_df.nunique()
Parameter 1         96
Parameter 2        143
Parameter 3         80
Parameter 4         91
Parameter 5        153
Parameter 6         60
Parameter 7        144
Parameter 8        436
Parameter 9         89
Parameter 10        96
Parameter 11        65
Signal_Strength      6
dtype: int64
Univariate Analysis
# Function to plot a boxplot and a histogram along the same scale.
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
    """
    Boxplot and histogram combined
    data: dataframe
    feature: dataframe column
    figsize: size of figure (default (12,7))
    kde: whether to show the density curve (default False)
    bins: number of bins for histogram (default None)
    """
    f2, (ax_box2, ax_hist2) = plt.subplots(
        nrows=2,  # number of rows of the subplot grid = 2
        sharex=True,  # x-axis will be shared among all subplots
        gridspec_kw={"height_ratios": (0.25, 0.75)},
        figsize=figsize,
    )  # creating the 2 subplots
    sns.boxplot(
        data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
    )  # boxplot; a star indicates the mean value of the column
    # Histogram, with the requested number of bins if one was given
    if bins:
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins)
    else:
        sns.histplot(data=data, x=feature, kde=kde, ax=ax_hist2)
    ax_hist2.axvline(
        data[feature].mean(), color="green", linestyle="--", label='Mean'
    )  # add mean to the histogram
    ax_hist2.axvline(
        data[feature].median(), color="black", linestyle="-", label='Median'
    )  # add median to the histogram
    ax_hist2.legend()
histogram_boxplot(data= signal_df, feature="Parameter 1", kde=True)
# There are many outliers in Parameter 1.
# The mean of Parameter 1 is greater than the median, so the plot is right-skewed.
# Most values cluster around 7.
histogram_boxplot(data= signal_df, feature="Parameter 2", kde=True)
# There are some outliers in Parameter 2.
# Mean and median are almost equal.
# Most values lie between about 0.4 and 0.6.
# The distribution is slightly right-skewed.
histogram_boxplot(data= signal_df, feature="Parameter 3", kde=True)
There is one outlier present.
Mean and median are almost equal.
Most of the records are less than 0.05.
histogram_boxplot(data= signal_df, feature="Parameter 4", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 5", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 6", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 7", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 8", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 9", kde=True)
Outliers are present at both ends.
The plot is right-skewed because of the outliers.
histogram_boxplot(data= signal_df, feature="Parameter 10", kde=True)
histogram_boxplot(data= signal_df, feature="Parameter 11", kde=True)
Bivariate Analysis
### Function to plot distributions
def distribution_plot_wrt_target(data, predictor, target):
    fig, axs = plt.subplots(1, 8, figsize=(25, 6))
    target_uniq = np.sort(data[target].unique())
    # One histogram per target class; each panel filters on its own class value
    for i, cls in enumerate(target_uniq):
        axs[i].set_title("Distribution of target for target=" + str(cls))
        sns.histplot(
            data=data[data[target] == cls],
            x=predictor,
            kde=True,
            ax=axs[i],
            color="teal" if i == 0 else "orange",
        )
    axs[6].set_title("Boxplot w.r.t target")
    sns.boxplot(data=data, x=target, y=predictor, ax=axs[6], palette="gist_rainbow")
    axs[7].set_title("Boxplot (without outliers) w.r.t target")
    sns.boxplot(
        data=data,
        x=target,
        y=predictor,
        ax=axs[7],
        showfliers=False,
        palette="gist_rainbow",
    )
    plt.tight_layout()
    plt.show()
distribution_plot_wrt_target(signal_df, 'Parameter 1', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 2', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 3', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 4', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 5', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 6', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 7', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 8', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 9', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 10', "Signal_Strength")
distribution_plot_wrt_target(signal_df, 'Parameter 11', "Signal_Strength")
Multivariate Analysis
sns.pairplot(data=signal_df, hue='Signal_Strength')
<seaborn.axisgrid.PairGrid at 0x7f78446bc210>
There is multicollinearity between Parameter 1 and Parameter 3, and between Parameter 1 and Parameter 9 (Parameter 1/Parameter 8 and Parameter 6/Parameter 7 also show absolute correlations above 0.6).
signal_df.corr()
| | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 | Parameter 5 | Parameter 6 | Parameter 7 | Parameter 8 | Parameter 9 | Parameter 10 | Parameter 11 | Signal_Strength |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Parameter 1 | 1.000000 | -0.255124 | 0.667437 | 0.111025 | 0.085886 | -0.140580 | -0.103777 | 0.670195 | -0.686685 | 0.190269 | -0.061596 | 0.119024 |
| Parameter 2 | -0.255124 | 1.000000 | -0.551248 | -0.002449 | 0.055154 | -0.020945 | 0.071701 | 0.023943 | 0.247111 | -0.256948 | -0.197812 | -0.395214 |
| Parameter 3 | 0.667437 | -0.551248 | 1.000000 | 0.143892 | 0.210195 | -0.048004 | 0.047358 | 0.357962 | -0.550310 | 0.326062 | 0.105108 | 0.228057 |
| Parameter 4 | 0.111025 | -0.002449 | 0.143892 | 1.000000 | 0.026656 | 0.160527 | 0.201038 | 0.324522 | -0.083143 | -0.011837 | 0.063281 | 0.013640 |
| Parameter 5 | 0.085886 | 0.055154 | 0.210195 | 0.026656 | 1.000000 | 0.000749 | 0.045773 | 0.193592 | -0.270893 | 0.394557 | -0.223824 | -0.130988 |
| Parameter 6 | -0.140580 | -0.020945 | -0.048004 | 0.160527 | 0.000749 | 1.000000 | 0.667246 | -0.018071 | 0.056631 | 0.054126 | -0.080125 | -0.050463 |
| Parameter 7 | -0.103777 | 0.071701 | 0.047358 | 0.201038 | 0.045773 | 0.667246 | 1.000000 | 0.078141 | -0.079257 | 0.035291 | -0.217829 | -0.177855 |
| Parameter 8 | 0.670195 | 0.023943 | 0.357962 | 0.324522 | 0.193592 | -0.018071 | 0.078141 | 1.000000 | -0.355617 | 0.146036 | -0.504995 | -0.184252 |
| Parameter 9 | -0.686685 | 0.247111 | -0.550310 | -0.083143 | -0.270893 | 0.056631 | -0.079257 | -0.355617 | 1.000000 | -0.214134 | 0.213418 | -0.055245 |
| Parameter 10 | 0.190269 | -0.256948 | 0.326062 | -0.011837 | 0.394557 | 0.054126 | 0.035291 | 0.146036 | -0.214134 | 1.000000 | 0.091621 | 0.248835 |
| Parameter 11 | -0.061596 | -0.197812 | 0.105108 | 0.063281 | -0.223824 | -0.080125 | -0.217829 | -0.504995 | 0.213418 | 0.091621 | 1.000000 | 0.480343 |
| Signal_Strength | 0.119024 | -0.395214 | 0.228057 | 0.013640 | -0.130988 | -0.050463 | -0.177855 | -0.184252 | -0.055245 | 0.248835 | 0.480343 | 1.000000 |
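The strongly correlated pairs can also be listed programmatically instead of read off the matrix by eye. A minimal sketch (the helper name and the 0.6 threshold are our own choices, not part of the assignment):

```python
import pandas as pd

def high_corr_pairs(df, threshold=0.6):
    """Return (col_a, col_b, |corr|) for pairs whose absolute correlation exceeds threshold."""
    corr = df.corr().abs()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):  # upper triangle only, skip self-correlation
            if corr.iloc[i, j] > threshold:
                pairs.append((cols[i], cols[j], round(corr.iloc[i, j], 3)))
    return pairs
```

Calling `high_corr_pairs(signal_df)` should surface the Parameter 1/3, 1/8, 1/9 and 6/7 pairs visible in the matrix above.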
D. Visualise distribution of the target variable. [2 Marks]
fig, (ax1) = plt.subplots(ncols=1, figsize=(20, 10))

def label_function(val):
    # Show the absolute count followed by the percentage for each slice
    return f'{val / 100 * len(signal_df):.0f} ({val:.0f}%)'

signal_df.groupby('Signal_Strength').size().plot(kind='pie', autopct=label_function,
                                                 textprops={'fontsize': 10}, ax=ax1)
ax1.set_ylabel('Signal Strength', size=22)
plt.show()
sns.distplot(signal_df['Signal_Strength'], kde=True, hist=True)
<matplotlib.axes._subplots.AxesSubplot at 0x7f77a650ce90>
sns.countplot(data=signal_df, x= 'Signal_Strength')
<matplotlib.axes._subplots.AxesSubplot at 0x7f7794131d50>
signal_df['Signal_Strength'].hist(figsize=(10, 4));
E. Share insights from the initial data analysis (at least 2). [2 Marks]
Let's also check the percentage of records in each class.
signal_df['Signal_Strength'].value_counts(1)*100
5    42.457689
6    39.367182
7    12.288447
4     3.899926
8     1.250920
3     0.735835
Name: Signal_Strength, dtype: float64
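Given this imbalance (classes 3, 4 and 8 are rare), class weights are one way to counter it during training. A sketch using scikit-learn's `compute_class_weight`; the helper name is our own, and it assumes the labels have already been encoded to start from 0, as is done later in the notebook:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

def make_class_weights(labels):
    """Weight each class inversely to its frequency: n_samples / (n_classes * count)."""
    classes = np.unique(labels)
    weights = compute_class_weight(class_weight='balanced', classes=classes, y=labels)
    return dict(zip(classes, weights))
```

The resulting dict could be passed as `class_weight=` to `model.fit` so rare classes contribute more to the loss.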
def replaceOutlierWithMedian(dataset, column):
    q1, q3 = np.quantile(dataset[column], [0.25, 0.75])
    iqr = q3 - q1
    whisker_width = 1.5
    lower_whisker = q1 - whisker_width * iqr
    upper_whisker = q3 + whisker_width * iqr
    median = dataset[column].median()
    # Replace any value outside the whiskers with the column median
    dataset[column] = np.where(
        (dataset[column] > upper_whisker) | (dataset[column] < lower_whisker),
        median,
        dataset[column],
    )
    print('Outlier replaced from ', column, " feature")
Replacing outliers with median values
for feature in signal_df.drop('Signal_Strength', axis=1).columns:
    replaceOutlierWithMedian(signal_df, feature)
Outlier replaced from  Parameter 1  feature
Outlier replaced from  Parameter 2  feature
Outlier replaced from  Parameter 3  feature
Outlier replaced from  Parameter 4  feature
Outlier replaced from  Parameter 5  feature
Outlier replaced from  Parameter 6  feature
Outlier replaced from  Parameter 7  feature
Outlier replaced from  Parameter 8  feature
Outlier replaced from  Parameter 9  feature
Outlier replaced from  Parameter 10  feature
Outlier replaced from  Parameter 11  feature
### Outliers are replaced with the median value; plot the distributions again
import math

plt.style.use('bmh')  # apply plot style to all the plots

def compareDistBoxPlt(df, features, figXmultiplier, figYmultiplier):
    N = len(features)
    cols = 2
    rows = int(math.ceil(N / cols)) * 2  # one dist-plot and one box-plot axis per feature
    fig, axs = plt.subplots(nrows=rows, ncols=cols,
                            figsize=(figXmultiplier * rows, figYmultiplier * cols))
    axs = axs.flatten()
    for ax in axs:
        ax.set_axis_off()  # hide unused axes
    count = 0
    for column in features:
        sns.distplot(df[column], ax=axs[count])
        axs[count].set_title(column + " dist plot")
        axs[count].set_xlabel("")
        axs[count].set_axis_on()
        count += 1
        sns.boxplot(data=df, x=column, ax=axs[count])
        axs[count].set_title(column + " box plot")
        axs[count].set_xlabel("")
        axs[count].set_axis_on()
        count += 1
    plt.show()
compareDistBoxPlt(signal_df, signal_df.drop('Signal_Strength',axis=1).columns, 1.2, 35);
Observations:
After outliers are replaced with the median, the distributions become sharper and the standard deviation decreases. The replacement itself can create a few new, small outliers; we can live with those.
X= signal_df.drop('Signal_Strength',axis=1)
y = signal_df[['Signal_Strength']]
Encode the y labels: computing class weights raises an error when labels start from 3, since labels are expected to start from 0. Also, without label encoding, the output layer would need 3 extra neurons (for the unused classes 0, 1 and 2).
encoder = LabelEncoder()
encoder.fit(y)
y = encoder.transform(y)
y =pd.DataFrame(y, columns=['Signal_Strength'])
Using LabelEncoder we mapped 3→0, 4→1, 5→2, 6→3, 7→4 and 8→5.
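The mapping can be sanity-checked on a toy array (the values below are for illustration only):

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

# LabelEncoder assigns consecutive integers to the *sorted* unique labels,
# so 3..8 always map to 0..5 regardless of their order of appearance.
enc = LabelEncoder()
encoded = enc.fit_transform(np.array([5, 3, 8, 6, 4, 7]))
mapping = dict(zip(enc.classes_, enc.transform(enc.classes_)))
# mapping == {3: 0, 4: 1, 5: 2, 6: 3, 7: 4, 8: 5}
```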
B. Split the data into train & test with 70:30 proportion.
X_train, X_test,y_train,y_test = train_test_split(X,y, test_size=0.30, random_state=1, stratify=y)
C. Print the shape of all 4 variables and verify that train and test data are in sync. [1 Mark]
print('Shape of X_train', X_train.shape)
print('Shape of y_train', y_train.shape)
print('Shape of X_test', X_test.shape)
print('Shape of y_test', y_test.shape)
Shape of X_train (951, 11)
Shape of y_train (951, 1)
Shape of X_test (408, 11)
Shape of y_test (408, 1)
### Checking train and test data are in sync
y_train.value_counts(normalize=True)*100
Signal_Strength
2    42.481598
3    39.327024
4    12.302839
1     3.890641
5     1.261830
0     0.736067
dtype: float64
y_test.value_counts(normalize=True)*100
Signal_Strength
2    42.401961
3    39.460784
4    12.254902
1     3.921569
5     1.225490
0     0.735294
dtype: float64
Observations: The class percentages above confirm that the class ratios are the same in the train and test sets.
D. Normalise the train and test data with appropriate method. [2 Marks]
scaler = StandardScaler()
X_train_norm = scaler.fit_transform(X_train)
X_test_norm = scaler.transform(X_test)
E. Transform Labels into format acceptable by Neural Network [2 Marks]
y_train_cal = to_categorical(y_train, dtype="uint8")
y_test_cal = to_categorical(y_test, dtype="uint8")
input_dim = len(X.columns)
input_dim
11
np.random.seed(1)
import random
import tensorflow as tf
random.seed(1)
tf.random.set_seed(1)
model = Sequential()
model.add(Dense(units=128, input_dim = input_dim,kernel_initializer='he_uniform',activation='relu'))
model.add(Dense(units=64, kernel_initializer='he_uniform',activation='relu'))
model.add(Dense(units=32, kernel_initializer='he_uniform',activation='relu'))
model.add(Dense(units=6, activation='softmax'))
model.compile(loss = "categorical_crossentropy", optimizer= Adam(learning_rate=1e-3), metrics = ['accuracy'])
plot_model(
model,
to_file="model.png",
show_shapes=True,
show_dtype=True,
show_layer_names=True,
expand_nested=True,
dpi=96,
)
B. Train the classifier using previously designed Architecture [2 Marks]
stop_early = EarlyStopping(monitor='val_loss', mode='min', patience=7)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.00001, mode='auto')
history_default_NN = model.fit(X_train_norm, y_train_cal, validation_data=(X_test_norm, y_test_cal), callbacks=[reduce_lr, stop_early], epochs=100, batch_size=32, verbose=2)
Epoch 1/100
30/30 - 1s - loss: 1.4954 - accuracy: 0.3943 - val_loss: 1.1948 - val_accuracy: 0.5123 - lr: 0.0010 - 617ms/epoch - 21ms/step
Epoch 2/100
30/30 - 0s - loss: 1.0861 - accuracy: 0.5794 - val_loss: 1.0997 - val_accuracy: 0.5760 - lr: 0.0010 - 103ms/epoch - 3ms/step
Epoch 3/100
30/30 - 0s - loss: 1.0139 - accuracy: 0.5952 - val_loss: 1.0567 - val_accuracy: 0.5956 - lr: 0.0010 - 105ms/epoch - 3ms/step
Epoch 4/100
30/30 - 0s - loss: 0.9695 - accuracy: 0.6057 - val_loss: 1.0484 - val_accuracy: 0.5931 - lr: 0.0010 - 100ms/epoch - 3ms/step
Epoch 5/100
30/30 - 0s - loss: 0.9334 - accuracy: 0.6183 - val_loss: 1.0273 - val_accuracy: 0.5956 - lr: 0.0010 - 103ms/epoch - 3ms/step
Epoch 6/100
30/30 - 0s - loss: 0.8975 - accuracy: 0.6351 - val_loss: 1.0299 - val_accuracy: 0.6005 - lr: 0.0010 - 100ms/epoch - 3ms/step
Epoch 7/100
30/30 - 0s - loss: 0.8739 - accuracy: 0.6383 - val_loss: 1.0245 - val_accuracy: 0.5760 - lr: 0.0010 - 103ms/epoch - 3ms/step
Epoch 8/100
30/30 - 0s - loss: 0.8525 - accuracy: 0.6509 - val_loss: 1.0224 - val_accuracy: 0.5907 - lr: 0.0010 - 99ms/epoch - 3ms/step
Epoch 9/100
30/30 - 0s - loss: 0.8271 - accuracy: 0.6435 - val_loss: 1.0260 - val_accuracy: 0.5588 - lr: 0.0010 - 97ms/epoch - 3ms/step
Epoch 10/100
30/30 - 0s - loss: 0.7995 - accuracy: 0.6751 - val_loss: 1.0224 - val_accuracy: 0.5784 - lr: 0.0010 - 99ms/epoch - 3ms/step
Epoch 11/100
30/30 - 0s - loss: 0.7902 - accuracy: 0.6835 - val_loss: 1.0382 - val_accuracy: 0.5833 - lr: 0.0010 - 99ms/epoch - 3ms/step
Epoch 12/100
30/30 - 0s - loss: 0.7660 - accuracy: 0.6719 - val_loss: 1.0380 - val_accuracy: 0.5882 - lr: 0.0010 - 102ms/epoch - 3ms/step
Epoch 13/100
30/30 - 0s - loss: 0.7432 - accuracy: 0.6982 - val_loss: 1.0438 - val_accuracy: 0.5662 - lr: 0.0010 - 99ms/epoch - 3ms/step
Epoch 14/100
30/30 - 0s - loss: 0.7071 - accuracy: 0.7256 - val_loss: 1.0373 - val_accuracy: 0.5809 - lr: 1.0000e-04 - 101ms/epoch - 3ms/step
Epoch 15/100
30/30 - 0s - loss: 0.6995 - accuracy: 0.7234 - val_loss: 1.0399 - val_accuracy: 0.5858 - lr: 1.0000e-04 - 99ms/epoch - 3ms/step
C. Plot 2 separate visuals. [3 Marks]
i. Training Loss and Validation Loss
history = pd.DataFrame(history_default_NN.history)
plt.plot(history['loss'])
plt.plot(history['val_loss'])
plt.legend(['Training Loss', 'Validation Loss'])
plt.show()
ii. Training Accuracy and Validation Accuracy
plt.plot(history['accuracy'])
plt.plot(history['val_accuracy'])
plt.legend(['Training Accuracy', 'Validation Accuracy'])
plt.show()
result= {}
loss, accuracy = model.evaluate(X_test_norm, y_test_cal)
result['Default NN with Original Dataset'] = [loss, accuracy, model, history_default_NN]
print('loss in Testing data ', loss)
print('Accuracy in Testing data ', accuracy)
13/13 [==============================] - 0s 2ms/step - loss: 1.0399 - accuracy: 0.5858
loss in Testing data  1.039906620979309
Accuracy in Testing data  0.5857843160629272
Test accuracy with the default neural network is about 0.586.
y_pre = model.predict(X_test_norm)
y_pred_final = np.argmax(y_pre, axis=1)  # class with the highest softmax probability per sample
xcal = np.sort(y_train['Signal_Strength'].unique())
cm = confusion_matrix(y_test,y_pred_final)
plt.figure(figsize=(10,7))
sns.heatmap(cm,annot=True,fmt='d',xticklabels=xcal,yticklabels=xcal)
plt.xlabel('Predicted')
plt.ylabel('Truth')
Text(72.5, 0.5, 'Truth')
print(classification_report(y_test, y_pred_final))
precision recall f1-score support
0 0.00 0.00 0.00 3
1 0.00 0.00 0.00 16
2 0.64 0.79 0.71 173
3 0.55 0.55 0.55 161
4 0.42 0.30 0.35 50
5 0.00 0.00 0.00 5
accuracy 0.59 408
macro avg 0.27 0.27 0.27 408
weighted avg 0.54 0.59 0.56 408
D. Design new architecture/update existing architecture in attempt to improve the performance of the model. [2 Marks]
from tensorflow import keras
def build_model(hp):
    model = Sequential()
    for layer in range(hp.Int('num_layer', 1, 10)):
        model.add(Dense(
            units=hp.Int("Units_" + str(layer + 1), min_value=16, max_value=512, step=32),
            activation=hp.Choice('activation_' + str(layer + 1), ["relu", "LeakyReLU"]),
            kernel_initializer='he_uniform',
        ))
        model.add(BatchNormalization())
        model.add(Dropout(rate=hp.Float('dropout_' + str(layer + 1),
                                        min_value=0.0,
                                        max_value=0.9,
                                        step=0.1)))
    model.add(Dense(units=6, activation='softmax'))
    metric = [
        keras.metrics.Recall(name='recall'),
        keras.metrics.AUC(name='prc', curve='PR'),  # area under the precision-recall curve
    ]
    learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
    model.compile(
        optimizer=Adam(learning_rate=learning_rate),
        loss=CategoricalCrossentropy(),
        metrics=['accuracy'] + metric,
    )
    return model
Hyperparameter-tuning the neural network to find the best number of hidden layers and neurons.
tuner = Hyperband(build_model, objective=kerastuner.Objective("val_accuracy", direction="max"), max_epochs=10, factor=2, hyperparameters=None, directory = 'my_dir3',project_name = '11',overwrite=True)
tuner.search_space_summary()
Search space summary
Default search space size: 5
num_layer (Int)
{'default': None, 'conditions': [], 'min_value': 1, 'max_value': 10, 'step': 1, 'sampling': None}
Units_1 (Int)
{'default': None, 'conditions': [], 'min_value': 16, 'max_value': 512, 'step': 32, 'sampling': None}
activation_1 (Choice)
{'default': 'relu', 'conditions': [], 'values': ['relu', 'LeakyReLU'], 'ordered': False}
dropout_1 (Float)
{'default': 0.0, 'conditions': [], 'min_value': 0.0, 'max_value': 0.9, 'step': 0.1, 'sampling': None}
lr (Float)
{'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'}
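The `sampling="log"` option for `lr` draws learning rates uniformly in log space, so every decade between 1e-4 and 1e-2 is equally likely. A minimal NumPy sketch of the same idea (not the KerasTuner internals):

```python
import numpy as np

rng = np.random.default_rng(42)
lo, hi = 1e-4, 1e-2

# Draw uniformly in log10 space, then exponentiate back to the original scale.
samples = 10 ** rng.uniform(np.log10(lo), np.log10(hi), size=10_000)

assert lo <= samples.min() and samples.max() <= hi
# About half the samples fall below the geometric mean sqrt(lo * hi) = 1e-3.
frac_below = (samples < 1e-3).mean()
```

With plain uniform sampling, roughly 90% of the draws would land above 1e-3, starving the small-learning-rate region the tuner most often needs to explore.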
tuner.search(X_train_norm, y_train_cal,validation_data=(X_test_norm,y_test_cal), callbacks=[reduce_lr, stop_early])
Trial 46 Complete [00h 00m 06s]
val_accuracy: 0.5857843160629272
Best val_accuracy So Far: 0.6029411554336548
Total elapsed time: 00h 03m 47s
INFO:tensorflow:Oracle triggered exit
#tuner.results_summary()
Get the best Model
best_default_model = tuner.get_best_models()[0]
Train the model with the best parameters
history_hypertuned = best_default_model.fit(X_train_norm, y_train_cal, validation_data=(X_test_norm, y_test_cal), callbacks=[reduce_lr, stop_early], epochs=100, batch_size=32, verbose=2)
Epoch 1/100: loss: 1.1024 - accuracy: 0.6215 - val_loss: 1.0879 - val_accuracy: 0.6005 - lr: 5.1592e-04
...
Epoch 28/100: loss: 1.2190 - accuracy: 0.5121 - val_loss: 0.9848 - val_accuracy: 0.5931 - lr: 5.1592e-05
Predict on the test set and compute accuracy and loss with the hypertuned model
y_pre = best_default_model.predict(X_test_norm)
y_pred_final = []
for i in y_pre:
    y_pred_final.append(np.argmax(i))
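The per-row loop above can be collapsed into a single vectorized call: `np.argmax(..., axis=1)` returns the index of the highest probability in each row of the softmax output.

```python
import numpy as np

# y_pre stands in for the (n_samples, n_classes) softmax output of model.predict().
y_pre = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2],
                  [0.2, 0.2, 0.6]])

y_pred_final = np.argmax(y_pre, axis=1)   # one predicted class index per row
assert y_pred_final.tolist() == [1, 0, 2]
```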
loss, accuracy, recall, prc = best_default_model.evaluate(X_test_norm, y_test_cal)
result['Hypertuned Default NN with Original Dataset'] = [loss, accuracy, best_default_model, history_hypertuned]
print('loss in Testing data ', loss)
print('Accuracy in Testing data ', accuracy)
13/13 [==============================] - 0s 3ms/step - loss: 0.9848 - accuracy: 0.5931 - recall: 0.4289 - prc: 0.6175 loss in Testing data 0.9847705960273743 Accuracy in Testing data 0.593137264251709
Observations:
cm = confusion_matrix(y_pred_final, y_test)  # arguments swapped on purpose: rows = predictions, columns = truth
plt.figure(figsize=(10, 7))
sns.heatmap(cm, annot=True, fmt='d', xticklabels=xcal, yticklabels=xcal)
plt.xlabel('Truth')
plt.ylabel('Predicted')
print(classification_report(y_test, y_pred_final))
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.00      0.00      0.00        16
           2       0.66      0.73      0.70       173
           3       0.54      0.66      0.59       161
           4       0.47      0.16      0.24        50
           5       0.00      0.00      0.00         5

    accuracy                           0.59       408
   macro avg       0.28      0.26      0.25       408
weighted avg       0.55      0.59      0.56       408
from sklearn.utils.class_weight import compute_class_weight

class_weights = compute_class_weight(class_weight='balanced',
                                     classes=np.unique(y_train),
                                     y=y_train['Signal_Strength'].values)
class_weights = dict(zip(np.unique(y_train), class_weights))
class_weights
{0: 22.642857142857142,
1: 4.283783783783784,
2: 0.39232673267326734,
3: 0.42379679144385024,
4: 1.3547008547008548,
5: 13.208333333333334}
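The `"balanced"` heuristic sets each weight to `n_samples / (n_classes * count_c)`, so rare classes get proportionally large weights. The class counts below are back-computed from the weights printed above (an assumption, e.g. 7 samples of class 0 out of 951 training rows), and the formula reproduces the dictionary:

```python
# Hypothetical class counts inferred from the printed weights (assumption).
counts = {0: 7, 1: 37, 2: 404, 3: 374, 4: 117, 5: 12}
n_samples = sum(counts.values())   # 951
n_classes = len(counts)            # 6

# "balanced" weight for class c: n_samples / (n_classes * count_c)
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}

assert abs(weights[0] - 22.642857142857142) < 1e-9
assert abs(weights[2] - 0.39232673267326734) < 1e-9
```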
Hypertune the model with class weights
tuner = Hyperband(build_model,
                  objective=kerastuner.Objective("val_accuracy", direction="max"),
                  max_epochs=10, factor=2,
                  directory='my_dir4', project_name='431', overwrite=True)
tuner.search_space_summary()
tuner.search(X_train_norm, y_train_cal,validation_data=(X_test_norm,y_test_cal), callbacks=[reduce_lr, stop_early],class_weight=class_weights)
Trial 46 Complete [00h 00m 08s]
val_accuracy: 0.39950981736183167
Best val_accuracy So Far: 0.46078431606292725
Total elapsed time: 00h 05m 28s
INFO:tensorflow:Oracle triggered exit
Finding the best model from the above hypertuned models
best_model_cw = tuner.get_best_models()[0]
best_model_cw
<keras.engine.sequential.Sequential at 0x7f783cbec3d0>
history_class_weights = best_model_cw.fit(X_train_norm, y_train_cal, validation_data=(X_test_norm,y_test_cal), callbacks=[reduce_lr, stop_early], epochs=100, batch_size=32, verbose=2,class_weight=class_weights)
y_pre = best_model_cw.predict(X_test_norm)
y_pred_final = []
for i in y_pre:
    y_pred_final.append(np.argmax(i))
Epoch 1/100: loss: 0.9276 - accuracy: 0.3849 - val_loss: 1.2657 - val_accuracy: 0.4804 - lr: 7.5883e-04
...
Epoch 17/100: loss: 1.2942 - accuracy: 0.3102 - val_loss: 1.2434 - val_accuracy: 0.4559 - lr: 7.5883e-05
print(classification_report(y_test, y_pred_final))
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.18      0.44      0.25        16
           2       0.67      0.58      0.62       173
           3       0.49      0.29      0.36       161
           4       0.33      0.66      0.44        50
           5       0.00      0.00      0.00         5

    accuracy                           0.46       408
   macro avg       0.28      0.33      0.28       408
weighted avg       0.53      0.46      0.47       408
loss, accuracy, recall, prc = best_model_cw.evaluate(X_test_norm, y_test_cal)
result['Hypertuned Default NN with Original Dataset with Class Weights'] = [loss, accuracy, best_model_cw, history_class_weights]
print('loss in Testing data ', loss)
print('Accuracy in Testing data ', accuracy)
13/13 [==============================] - 0s 3ms/step - loss: 1.2434 - accuracy: 0.4559 - recall: 0.1985 - prc: 0.4453 loss in Testing data 1.24338698387146 Accuracy in Testing data 0.45588234066963196
from imblearn.over_sampling import SMOTE

sm = SMOTE()
X_train_sm, y_train_sm = sm.fit_resample(X_train, y_train)
X_train_sm_norm = scaler.fit_transform(X_train_sm)
X_test__sm_norm = scaler.transform(X_test)
y_train_sm.value_counts()
Signal_Strength
0    404
1    404
2    404
3    404
4    404
5    404
dtype: int64
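SMOTE balances the classes by synthesising new minority samples rather than duplicating existing ones: each synthetic point is an interpolation `x_new = x_i + λ(x_nn − x_i)` between a minority sample and one of its nearest same-class neighbours, with λ drawn from [0, 1]. A minimal NumPy sketch of that interpolation step (not the full imblearn implementation; the two points are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two minority-class points with hypothetical 2-D features.
x_i = np.array([1.0, 2.0])
x_nn = np.array([3.0, 6.0])   # a nearest neighbour of x_i within the same class

lam = rng.uniform(0.0, 1.0)
x_new = x_i + lam * (x_nn - x_i)   # synthetic sample on the segment joining them

# Feature-wise, the synthetic point always lies between the two originals.
assert np.all(x_new >= np.minimum(x_i, x_nn))
assert np.all(x_new <= np.maximum(x_i, x_nn))
```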
Hypertuning with the SMOTE training data
tuner = Hyperband(build_model,
                  objective=kerastuner.Objective("val_accuracy", direction="max"),
                  max_epochs=10, factor=2,
                  directory='my_dir4', project_name='221', overwrite=True)
tuner.search_space_summary()
Search space summary
Default search space size: 5
num_layer (Int)
{'default': None, 'conditions': [], 'min_value': 1, 'max_value': 10, 'step': 1, 'sampling': None}
Units_1 (Int)
{'default': None, 'conditions': [], 'min_value': 16, 'max_value': 512, 'step': 32, 'sampling': None}
activation_1 (Choice)
{'default': 'relu', 'conditions': [], 'values': ['relu', 'LeakyReLU'], 'ordered': False}
dropout_1 (Float)
{'default': 0.0, 'conditions': [], 'min_value': 0.0, 'max_value': 0.9, 'step': 0.1, 'sampling': None}
lr (Float)
{'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'}
y_train_sm_cal = to_categorical(y_train_sm)
tuner.search(X_train_sm_norm, y_train_sm_cal,validation_data=(X_test__sm_norm,y_test_cal), callbacks=[reduce_lr, stop_early])
Trial 46 Complete [00h 00m 09s]
val_accuracy: 0.46078431606292725
Best val_accuracy So Far: 0.5196078419685364
Total elapsed time: 00h 06m 09s
INFO:tensorflow:Oracle triggered exit
Select the best model trained on the SMOTE dataset
best_model_sm = tuner.get_best_models()[0]
best_model_sm
<keras.engine.sequential.Sequential at 0x7f7830302b50>
history_smote = best_model_sm.fit(X_train_sm_norm, y_train_sm_cal, validation_data=(X_test__sm_norm,y_test_cal), callbacks=[reduce_lr, stop_early], epochs=100, batch_size=32, verbose=2)
Epoch 1/100: loss: 0.8386 - accuracy: 0.6906 - val_loss: 1.8720 - val_accuracy: 0.5123 - lr: 0.0097
...
Epoch 15/100: loss: 0.4182 - accuracy: 0.8276 - val_loss: 1.3677 - val_accuracy: 0.5270 - lr: 9.7062e-04
Calculating the Accuracy and loss
# Predict on the test set scaled with the scaler fitted on the SMOTE data,
# matching the inputs the model was trained on.
y_pre = best_model_sm.predict(X_test__sm_norm)
y_pred_final = []
for i in y_pre:
    y_pred_final.append(np.argmax(i))
loss, accuracy, recall, prc = best_model_sm.evaluate(X_test__sm_norm, y_test_cal)
result['Hypertuned Default NN with Original Dataset With SMOTE'] = [loss, accuracy, best_model_sm, history_smote]
print('loss in Testing data ', loss)
print('Accuracy in Testing data ', accuracy)
13/13 [==============================] - 0s 3ms/step - loss: 1.3677 - accuracy: 0.5270 - recall: 0.4559 - prc: 0.5219 loss in Testing data 1.3677393198013306 Accuracy in Testing data 0.5269607901573181
print(classification_report(y_test, y_pred_final))
              precision    recall  f1-score   support

           0       0.00      0.00      0.00         3
           1       0.12      0.31      0.18        16
           2       0.71      0.53      0.61       173
           3       0.53      0.50      0.51       161
           4       0.38      0.54      0.45        50
           5       0.00      0.00      0.00         5

    accuracy                           0.50       408
   macro avg       0.29      0.31      0.29       408
weighted avg       0.56      0.50      0.52       408
def printResult(result):
    '''
    Print the scores in the form of a table. Accepts a dictionary object holding all the scores.
    Param: result
    Returns: DataFrame built from result
    '''
    result1 = pd.DataFrame(np.array(list(result.values()))[:, :-2],  # keep only the loss and accuracy columns
                           columns=['Loss', 'Accuracy'],
                           index=result.keys())  # use the model names as index
    result1.index.name = 'Model'  # name the index of the result1 dataframe as 'Model'
    return result1
printResult(result)
| Model | Loss | Accuracy |
|---|---|---|
| Default NN with Original Dataset | 1.039907 | 0.585784 |
| Hypertuned Default NN with Original Dataset | 0.984771 | 0.593137 |
| Hypertuned Default NN with Original Dataset with Class Weights | 1.243387 | 0.455882 |
| Hypertuned Default NN with Original Dataset With SMOTE | 1.367739 | 0.526961 |
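The winning entry can also be picked programmatically. The dictionary below is a hypothetical, trimmed version of the `result` dictionary holding only the (loss, accuracy) pairs from the table:

```python
# Trimmed stand-in for the `result` dictionary: [loss, accuracy] per model (assumption).
result = {
    'Default NN with Original Dataset': [1.039907, 0.585784],
    'Hypertuned Default NN with Original Dataset': [0.984771, 0.593137],
    'Hypertuned Default NN with Original Dataset with Class Weights': [1.243387, 0.455882],
    'Hypertuned Default NN with Original Dataset With SMOTE': [1.367739, 0.526961],
}

best = max(result, key=lambda k: result[k][1])   # index 1 holds the accuracy
assert best == 'Hypertuned Default NN with Original Dataset'
```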
Considering accuracy, the "Hypertuned Default NN with Original Dataset" performs best, so its accuracy and loss curves are plotted.
E. Plot visuals as in Q3.C and share insights about difference observed in both the models. [3 Marks]
fig, axs = plt.subplots(2, 4, figsize=(40, 6))
row = 0
for i, key in enumerate(result.keys()):
    axs[row][i].plot(pd.DataFrame(result[key][-1].history)['loss'])
    axs[row][i].plot(pd.DataFrame(result[key][-1].history)['val_loss'])
    axs[row][i].legend(['Training Loss', 'Validation Loss'])
    axs[row][i].title.set_text(key)
row = row + 1  # second row of subplots holds the accuracy curves
for i, key in enumerate(result.keys()):
    axs[row][i].plot(pd.DataFrame(result[key][-1].history)['accuracy'])
    axs[row][i].plot(pd.DataFrame(result[key][-1].history)['val_accuracy'])
    axs[row][i].legend(['Training Accuracy', 'Validation Accuracy'])
    axs[row][i].title.set_text(key)
plt.show()
Insight:
Loss is much lower for the Hypertuned Default NN with Original Dataset.
The model identifies the minority classes correctly only when a sufficient amount of data is present for them.
Part B
DOMAIN: Autonomous Vehicles
PROJECT OBJECTIVE: To build a digit classifier on the SVHN (Street View Housing Number) dataset. Steps and tasks:
1. Data Import and Exploration [5 Marks]
A. Read the .h5 file and assign to a variable. [2 Marks]
import h5py
data = h5py.File('/content/drive/My Drive/Colab Notebooks/DeepLearning/Assignment/Autonomous_Vehicles_SVHN_single_grey1.h5', 'r')
B. Print all the keys from the .h5 file. [1 Marks]
list(data.keys())
['X_test', 'X_train', 'X_val', 'y_test', 'y_train', 'y_val']
C. Split the data into X_train, X_test, Y_train, Y_test [2 Marks]
X_train, X_test, y_train, y_test = data['X_train'],data['X_test'] ,data['y_train'] ,data['y_test']
A. Print the shapes of all 4 data splits (X/y, train/test) to verify that X & y are in sync.
print('Shape of X_train ', X_train.shape)
print('Shape of X_test ', X_test.shape)
print('Shape of y_train ', y_train.shape)
print('Shape of y_test ', y_test.shape)
Shape of X_train  (42000, 32, 32)
Shape of X_test  (18000, 32, 32)
Shape of y_train  (42000,)
Shape of y_test  (18000,)
B. Visualise the first 10 images in the train data and print their corresponding labels
plt.figure(figsize=(15, 6))
for i in range(10):
    plt.subplot(1, 10, i + 1)
    plt.imshow(X_train[i], cmap="gray")
    plt.axis('off')
plt.show()
print('Labels of the images are', y_train[0:10])
Labels of the images are [2 6 7 4 4 0 3 0 7 3]
C. Reshape all the images to the appropriate shape and update the data in the same variables.
image_vector_size = 32*32
X_train = np.asarray(X_train).reshape(X_train.shape[0], image_vector_size)
X_test = np.asarray(X_test).reshape(X_test.shape[0], image_vector_size)
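Reshaping turns each 32x32 image into a single 1024-long vector by concatenating its rows in order, so a dense network can consume it. A tiny sketch of the same operation on two 2x2 "images":

```python
import numpy as np

images = np.arange(8).reshape(2, 2, 2)      # two 2x2 "images"
flat = images.reshape(images.shape[0], 2 * 2)  # flatten each image to a 4-vector

assert flat.shape == (2, 4)
assert flat[0].tolist() == [0, 1, 2, 3]     # rows concatenated in order
```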
D. Normalise the images, i.e. normalise the pixel values.
X_train = X_train / 255.0
X_test = X_test / 255.0
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
Training set (42000, 1024) (42000,)
Test set (18000, 1024) (18000,)
E. Transform the labels into a format acceptable to the neural network
num_of_classes=10
y_train_cal = to_categorical(y_train,num_classes=num_of_classes)
y_test_cal = to_categorical(y_test, num_classes=num_of_classes)
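`to_categorical` one-hot encodes the integer labels so they match the 10-unit softmax output. The same encoding can be obtained by indexing an identity matrix, which makes the transformation easy to see:

```python
import numpy as np

y = np.array([2, 6, 7, 4])    # labels taken from the first images shown earlier
one_hot = np.eye(10)[y]       # equivalent to to_categorical(y, num_classes=10)

assert one_hot.shape == (4, 10)
assert one_hot[0].tolist() == [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
assert one_hot.sum(axis=1).tolist() == [1.0, 1.0, 1.0, 1.0]  # exactly one 1 per row
```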
F. Print the total number of classes in the dataset.
print('Total number of classes in the dataset: ' + str(len(np.unique(y_test))))
Total number of classes in the dataset: 10
A. Design a Neural Network to train a classifier.
image_size = 32 * 32
model = Sequential()
model.add(Dense(units=128, activation='relu', kernel_initializer='he_uniform', input_shape=(image_size,)))
model.add(Dense(units=64, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(units=32, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(num_of_classes, activation='softmax'))
adam = Adam(learning_rate=1e-3)  # the `lr` argument is deprecated in favour of `learning_rate`
model.compile(loss="categorical_crossentropy", optimizer=adam, metrics=['accuracy'])  # loss function = categorical cross-entropy
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 128) 131200
dense_5 (Dense) (None, 64) 8256
dense_6 (Dense) (None, 32) 2080
dense_7 (Dense) (None, 10) 330
=================================================================
Total params: 141,866
Trainable params: 141,866
Non-trainable params: 0
_________________________________________________________________
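Each Dense layer contributes `(inputs + 1) * units` parameters (a weight per input plus one bias per unit), so the summary's counts can be checked by hand:

```python
def dense_params(n_in, n_units):
    """Weights (n_in * n_units) plus one bias per unit."""
    return (n_in + 1) * n_units

# (input size, units) for the four Dense layers above.
layers = [(1024, 128), (128, 64), (64, 32), (32, 10)]
counts = [dense_params(i, u) for i, u in layers]

assert counts == [131200, 8256, 2080, 330]
assert sum(counts) == 141866   # matches "Total params" in the summary
```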
checkpoint = ModelCheckpoint("model_weights.h5", monitor='val_accuracy',
                             save_weights_only=True, mode='max', verbose=1)
callbacks = [checkpoint, reduce_lr, stop_early]
B. Train the classifier using previously designed Architecture
# Fit the model
history=model.fit(X_train, y_train_cal, validation_data=(X_test, y_test_cal), epochs=100, batch_size=128, verbose=2,callbacks=callbacks)
Epoch 1/100: loss: 2.2395 - accuracy: 0.1499 - val_loss: 2.0941 - val_accuracy: 0.2260 - lr: 0.0010
...
Epoch 31/100: loss: 0.6066 - accuracy: 0.8182 - val_loss: 0.6939 - val_accuracy: 0.7992 - lr: 1.0000e-04
...
Epoch 65/100: loss: 0.5612 - accuracy: 0.8342 - val_loss: 0.6776 - val_accuracy: 0.8053 - lr: 1.0000e-05
Epoch 66/100: loss: 0.5609 -
accuracy: 0.8341 - val_loss: 0.6777 - val_accuracy: 0.8047 - lr: 1.0000e-05 - 998ms/epoch - 3ms/step Epoch 67/100 Epoch 67: saving model to model_weights.h5 329/329 - 1s - loss: 0.5609 - accuracy: 0.8336 - val_loss: 0.6779 - val_accuracy: 0.8053 - lr: 1.0000e-05 - 1s/epoch - 3ms/step Epoch 68/100 Epoch 68: saving model to model_weights.h5 329/329 - 1s - loss: 0.5606 - accuracy: 0.8334 - val_loss: 0.6781 - val_accuracy: 0.8052 - lr: 1.0000e-05 - 1s/epoch - 3ms/step
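The lr column in the log drops from 1e-03 to 1e-04 and then 1e-05 while a checkpoint is written every epoch, which is consistent with a ReduceLROnPlateau plus ModelCheckpoint callback list. A hypothetical reconstruction (the actual `callbacks` are defined earlier in the notebook; the monitor, factor, and patience values here are assumptions):

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

# Hypothetical sketch: cut the learning rate 10x when val_loss plateaus,
# and save weights after every epoch (matching the "saving model to
# model_weights.h5" messages in the log above).
callbacks = [
    ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1),
    ModelCheckpoint('model_weights.h5', verbose=1),
]
```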
Predicting classes with argmax: the model outputs a probability for each class of the target variable, and np.argmax picks the class with the highest probability for each sample.
# predict class probabilities on the test data
y_pred = model.predict(X_test)
# the outputs are per-class probabilities, so take the class with the
# highest probability for each sample
y_pred_final = []
for i in y_pred:
    y_pred_final.append(np.argmax(i))
loss, accuracy = model.evaluate(X_test, y_test_cal)
print('Test accuracy : ', accuracy)
563/563 [==============================] - 1s 2ms/step - loss: 0.6781 - accuracy: 0.8052 Test accuracy : 0.8051666617393494
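The per-sample loop above can also be written as a single vectorized call. A minimal sketch with a toy probability matrix (the values are illustrative, not from this model):

```python
import numpy as np

# Toy 2-sample, 3-class probability matrix (illustrative values)
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

# argmax along axis=1 picks the most probable class per row
pred_classes = np.argmax(probs, axis=1)
print(pred_classes)  # → [1 0]
```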
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred_final))
precision recall f1-score support
0 0.83 0.84 0.84 1814
1 0.80 0.84 0.82 1828
2 0.83 0.81 0.82 1803
3 0.76 0.76 0.76 1719
4 0.83 0.84 0.84 1812
5 0.78 0.80 0.79 1768
6 0.81 0.80 0.80 1832
7 0.83 0.84 0.84 1808
8 0.80 0.74 0.77 1812
9 0.79 0.77 0.78 1804
accuracy 0.81 18000
macro avg 0.80 0.80 0.80 18000
weighted avg 0.81 0.81 0.80 18000
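The macro avg and weighted avg rows of the report above are two ways of averaging the per-class F1 scores: an unweighted mean versus a support-weighted mean. A small sketch with made-up labels (not this dataset):

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy labels (hypothetical), 3 classes with unequal support
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 2, 2]

per_class = f1_score(y_true, y_pred, average=None)       # one F1 per class
macro = f1_score(y_true, y_pred, average='macro')        # unweighted mean
weighted = f1_score(y_true, y_pred, average='weighted')  # support-weighted mean

print(np.isclose(macro, per_class.mean()))  # → True
```

With unequal class support, macro and weighted averages diverge; with a balanced test set like the one above (~1800 samples per class) they come out nearly identical.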
Observations: the baseline network reaches about 80.5% test accuracy; class 3 is the weakest (F1 = 0.76) and class 8 has the lowest recall (0.74).
Let's now run hyperparameter tuning to find a better network.
def build_model(hp):
    model = Sequential()
    # 1 to 5 hidden blocks, each with a tunable width, activation and dropout rate
    for layer in range(hp.Int('num_layer', 1, 5)):
        model.add(Dense(units=hp.Int('Units_' + str(layer + 1), min_value=16, max_value=512, step=32),
                        activation=hp.Choice('activation' + str(layer + 1), ['relu', 'LeakyReLU']),
                        kernel_initializer='he_uniform'))
        model.add(BatchNormalization())
        model.add(Dropout(rate=hp.Float('dropout_' + str(layer + 1),
                                        min_value=0.0, max_value=0.9, step=0.1)))
    model.add(Dense(units=num_of_classes, activation='softmax'))
    # sample the learning rate on a log scale
    learning_rate = hp.Float('lr', min_value=1e-4, max_value=1e-2, sampling='log')
    model.compile(optimizer=Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
tuner = Hyperband(build_model,
                  objective=kerastuner.Objective('val_accuracy', direction='max'),
                  max_epochs=10, factor=3,
                  directory='svhn', project_name='tuning', overwrite=True)
tuner.search_space_summary()
tuner.search(X_train, y_train_cal, validation_data=(X_test, y_test_cal), callbacks=callbacks)
Trial 30 Complete [00h 01m 30s] val_accuracy: 0.16699999570846558 Best val_accuracy So Far: 0.750166654586792 Total elapsed time: 00h 17m 06s INFO:tensorflow:Oracle triggered exit
best_model = tuner.get_best_models()[0]
best_model
<keras.engine.sequential.Sequential at 0x7f7791ed9190>
history=best_model.fit(X_train, y_train_cal, validation_data=(X_test, y_test_cal), epochs=100, batch_size=256, verbose=2,callbacks=callbacks)
Epoch 1/100: loss: 0.8257 - accuracy: 0.7511 - val_loss: 0.7238 - val_accuracy: 0.7881 - lr: 4.7887e-04
...
Epoch 17/100: loss: 0.6472 - accuracy: 0.8023 - val_loss: 0.5720 - val_accuracy: 0.8324 - lr: 4.7887e-05
...
Epoch 32/100: loss: 0.6155 - accuracy: 0.8123 - val_loss: 0.5488 - val_accuracy: 0.8394 - lr: 1.0000e-05
...
Epoch 64/100: loss: 0.6119 - accuracy: 0.8142 - val_loss: 0.5439 - val_accuracy: 0.8406 - lr: 1.0000e-05
(per-epoch checkpoint and timing lines omitted; the model was saved to model_weights.h5 after every epoch)
# predict class probabilities on the test data with the tuned model
y_pred = best_model.predict(X_test)
# the outputs are per-class probabilities, so take the class with the
# highest probability for each sample
y_pred_final = []
for i in y_pred:
    y_pred_final.append(np.argmax(i))
C. Evaluate performance of the model with appropriate metrics.
loss, accuracy = best_model.evaluate(X_test, y_test_cal)
print('Test accuracy : ', accuracy)
563/563 [==============================] - 2s 3ms/step - loss: 0.5439 - accuracy: 0.8406 Test accuracy : 0.8406111001968384
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred_final))
precision recall f1-score support
0 0.85 0.88 0.87 1814
1 0.83 0.86 0.84 1828
2 0.87 0.84 0.86 1803
3 0.81 0.79 0.80 1719
4 0.85 0.87 0.86 1812
5 0.80 0.85 0.82 1768
6 0.85 0.83 0.84 1832
7 0.86 0.87 0.86 1808
8 0.86 0.78 0.82 1812
9 0.84 0.82 0.83 1804
accuracy 0.84 18000
macro avg 0.84 0.84 0.84 18000
weighted avg 0.84 0.84 0.84 18000
from sklearn.metrics import confusion_matrix
import seaborn as sns
cm=confusion_matrix(y_test,y_pred_final)
plt.figure(figsize=(10,7))
sns.heatmap(cm,annot=True,fmt='d')
plt.xlabel('Predicted')
plt.ylabel('Truth')
plt.show()
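The heatmap's diagonal divided by each row sum gives per-class recall, which matches the recall column of the classification report. A sketch on toy labels (not this dataset):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy labels (hypothetical)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2]

cm = confusion_matrix(y_true, y_pred)
# rows = true class, columns = predicted class;
# diagonal entries are the correctly classified counts
per_class_recall = cm.diagonal() / cm.sum(axis=1)
print(per_class_recall)  # → [0.5 1.  1. ]
```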
Misclassified samples
# collect the indices of the test samples the model got wrong
misclassified_images = []
for index, (label, predict) in enumerate(zip(y_test, y_pred_final)):
    if label != predict:
        misclassified_images.append(index)
print(misclassified_images)
print(len(misclassified_images))
[0, 7, 8, 11, 27, 40, 45, 46, 56, 58, 62, 64, 68, 89, 100, ...]
(index list truncated; the full output lists every misclassified index in the 18,000-sample test set)
15955, 15956, 15968, 15971, 15980, 15985, 15989, 15995, 16000, 16013, 16015, 16018, 16035, 16037, 16039, 16049, 16054, 16066, 16069, 16074, 16078, 16084, 16085, 16091, 16098, 16100, 16112, 16113, 16117, 16122, 16123, 16135, 16143, 16149, 16152, 16154, 16161, 16163, 16169, 16174, 16175, 16178, 16186, 16196, 16200, 16207, 16213, 16223, 16226, 16232, 16234, 16243, 16252, 16253, 16254, 16263, 16267, 16276, 16283, 16284, 16292, 16297, 16302, 16307, 16311, 16314, 16329, 16335, 16337, 16348, 16350, 16356, 16362, 16364, 16368, 16371, 16375, 16390, 16402, 16408, 16413, 16420, 16425, 16429, 16430, 16432, 16435, 16440, 16446, 16465, 16470, 16476, 16480, 16483, 16488, 16493, 16506, 16507, 16508, 16510, 16511, 16519, 16523, 16529, 16530, 16532, 16535, 16536, 16538, 16545, 16549, 16558, 16563, 16569, 16585, 16592, 16597, 16598, 16600, 16601, 16606, 16623, 16632, 16634, 16652, 16665, 16672, 16673, 16678, 16680, 16683, 16690, 16699, 16712, 16715, 16716, 16720, 16725, 16730, 16752, 16756, 16764, 16769, 16773, 16775, 16782, 16787, 16788, 16790, 16794, 16812, 16814, 16823, 16828, 16833, 16835, 16837, 16842, 16843, 16851, 16853, 16854, 16856, 16861, 16864, 16874, 16876, 16894, 16911, 16913, 16917, 16918, 16927, 16930, 16934, 16941, 16942, 16947, 16970, 16979, 17001, 17003, 17013, 17014, 17021, 17023, 17025, 17028, 17039, 17042, 17059, 17061, 17076, 17077, 17080, 17084, 17090, 17109, 17116, 17121, 17130, 17131, 17136, 17147, 17158, 17165, 17169, 17170, 17174, 17176, 17177, 17178, 17190, 17193, 17198, 17199, 17208, 17224, 17227, 17229, 17230, 17234, 17235, 17238, 17239, 17243, 17244, 17246, 17258, 17268, 17283, 17285, 17289, 17300, 17301, 17303, 17308, 17315, 17317, 17322, 17336, 17337, 17338, 17342, 17367, 17370, 17376, 17378, 17380, 17389, 17390, 17397, 17400, 17409, 17414, 17416, 17417, 17421, 17423, 17434, 17447, 17454, 17471, 17477, 17478, 17511, 17512, 17515, 17523, 17526, 17535, 17536, 17541, 17542, 17549, 17552, 17557, 17569, 17572, 17573, 17577, 17587, 17588, 17600, 17605, 
17614, 17620, 17623, 17624, 17631, 17641, 17643, 17645, 17654, 17660, 17671, 17682, 17685, 17692, 17700, 17710, 17716, 17723, 17733, 17735, 17739, 17742, 17744, 17756, 17759, 17769, 17771, 17776, 17777, 17793, 17796, 17807, 17821, 17829, 17840, 17858, 17860, 17869, 17873, 17881, 17892, 17894, 17897, 17909, 17911, 17917, 17923, 17926, 17929, 17943, 17946, 17948, 17953, 17955, 17964, 17979, 17981, 17984] 2869
image_index = 40
# Display the 32x32 grayscale test image
plt.imshow(X_test[image_index].reshape(32, 32), cmap='Greys')
# Flatten to the (1, 1024) shape the model expects and predict class probabilities
pred = model.predict(X_test[image_index].reshape(-1, 1024))
print("Was predicted ", pred.argmax())
print("Was labeled ", y_test[image_index])
print("Predicted Probabilities: ", pred)
Was predicted 0 Was labeled 6 Predicted Probabilities: [[0.2978457 0.11034926 0.0696594 0.04331229 0.04450411 0.0751532 0.15464994 0.05441099 0.10711899 0.04299614]]
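The long list of indices printed further above can be reproduced by comparing the predicted class labels with the true labels across the whole test set. A minimal sketch of that comparison, using small synthetic stand-ins for the model's probability output and the labels (the real `pred` and `y_test` come from the earlier cells):

```python
import numpy as np

# Hypothetical stand-ins: true labels and per-class probability rows
y_true = np.array([6, 0, 3, 3, 1])
y_pred_prob = np.array([
    [0.90, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],  # argmax 0, true 6 -> wrong
    [0.80, 0.05, 0.05, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01],  # argmax 0, true 0 -> right
    [0.01, 0.01, 0.90, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01, 0.00],  # argmax 2, true 3 -> wrong
    [0.01, 0.01, 0.02, 0.90, 0.02, 0.01, 0.01, 0.01, 0.01, 0.00],  # argmax 3, true 3 -> right
    [0.05, 0.80, 0.05, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01],  # argmax 1, true 1 -> right
])

# Class with the highest probability in each row
y_pred = y_pred_prob.argmax(axis=1)
# Indices where the prediction disagrees with the true label
misclassified = np.where(y_pred != y_true)[0]
print(list(misclassified), len(misclassified))  # -> [0, 2] 2
```

On the real test set the same two lines (`argmax` over `model.predict(X_test)` and `np.where` against `y_test`) yield the printed index list and its length.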
Observation:
D. Plot the training loss and validation loss vs. the number of epochs, and the training accuracy and validation accuracy vs. the number of epochs, and write your observations on the plots.
Loss Curve
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['training loss', 'validation loss'], loc='best')
plt.show()
Observations:
Accuracy Curve
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
Observations:
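A numerical check can complement the visual reading of the two curves, e.g. comparing the final training and validation accuracy and watching whether validation loss has started rising. A sketch using a hypothetical `history` dict shaped like the one returned by `model.fit` (the real values come from training):

```python
# Hypothetical stand-in for history.history from model.fit
history = {
    'loss':         [1.90, 1.45, 1.20, 1.05, 0.95],
    'val_loss':     [1.70, 1.40, 1.30, 1.28, 1.32],
    'accuracy':     [0.35, 0.52, 0.61, 0.67, 0.71],
    'val_accuracy': [0.40, 0.53, 0.58, 0.60, 0.59],
}

# Gap between final training and validation accuracy
gap = history['accuracy'][-1] - history['val_accuracy'][-1]
print(f"final train acc {history['accuracy'][-1]:.2f}, "
      f"val acc {history['val_accuracy'][-1]:.2f}, gap {gap:.2f}")

# A growing gap together with a validation loss above its minimum
# is a common sign of overfitting
if gap > 0.05 and history['val_loss'][-1] > min(history['val_loss']):
    print("validation loss is past its minimum while training loss "
          "keeps falling -> likely overfitting")
```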
Insights: